Explaining adversarial vulnerability with a data sparsity hypothesis

Authors

Abstract

Despite many proposed algorithms to provide robustness to deep learning (DL) models, DL models remain susceptible to adversarial attacks. We hypothesize that this vulnerability stems from two factors. The first factor is data sparsity: in the high-dimensional input space, there exist large regions outside the support of the data distribution. The second is the existence of redundant parameters in DL models. Owing to these factors, different models are able to come up with different decision boundaries of comparable prediction accuracy. The shape of the decision boundary in the space outside the support of the data distribution does not affect the prediction accuracy of the model. However, it makes an important difference to robustness: the ideal boundary lies as far as possible from the data distribution. In this paper, we develop a training framework to observe whether models learn such a boundary, one spanning the space around the class distributions, further from the data points themselves. Semi-supervised learning was deployed during training by leveraging generated unlabeled data. The adversarial robustness of models trained using this framework was measured against well-known attacks and robustness metrics. We found that models trained with our framework, as well as with other regularization methods consistent with our hypothesis, have decision boundaries more similar to the aforementioned ideal boundary. We also show that unlabeled data generated from noise is almost as effective as data sourced from existing datasets or produced by synthesis algorithms. The code for our framework is available online.
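A minimal sketch of the high-level idea as the abstract describes it, not the authors' actual implementation: alongside the usual supervised loss, unlabeled "noise" inputs drawn away from the data are pushed toward a uniform prediction, discouraging the decision boundary from passing close to the class distributions. `model`, `x`, `y`, and the Gaussian-noise sampler are placeholders assumed here.

```python
import torch
import torch.nn.functional as F

def training_step(model, x, y, noise_weight=1.0):
    """Combine supervised cross-entropy with a uniform-output penalty
    on out-of-distribution noise samples (assumed formulation)."""
    # Supervised term on labeled data.
    sup_loss = F.cross_entropy(model(x), y)

    # Unlabeled samples: plain Gaussian noise in input space stands in for
    # "data generated outside the support of the data distribution".
    x_noise = torch.randn_like(x)

    # Push the model toward maximal uncertainty (uniform softmax) on noise,
    # i.e. minimize cross-entropy against a uniform target.
    log_probs = F.log_softmax(model(x_noise), dim=1)
    unif_loss = -log_probs.mean()

    return sup_loss + noise_weight * unif_loss
```

The returned loss can be backpropagated as usual; how the unlabeled points are generated (noise, external data, or a synthesis algorithm) is exactly the design choice the abstract compares.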


Related articles

Explaining and Harnessing Adversarial Examples

Several machine learning models, including neural networks, consistently misclassify adversarial examples—inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitti...
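The perturbations this abstract describes are commonly computed with the fast gradient sign method (FGSM) proposed in the same paper; below is a minimal PyTorch sketch, with `model`, `x`, `y` as placeholders, `epsilon` as the L-infinity budget, and inputs assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    """Return x perturbed one step in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then stay in the valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```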


Adversarial vulnerability for any classifier

Despite achieving impressive and often superhuman performance on multiple benchmarks, state-of-the-art deep networks remain highly vulnerable to perturbations: adding small, imperceptible, adversarial perturbations can lead to very high error rates. Provided the data distribution is defined using a generative model mapping latent vectors to datapoints in the distribution, we prove that no class...
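For context, one common formalization of the robustness such results reason about is the adversarial (robust) risk of a classifier f under perturbations of norm at most epsilon; this definition is standard and not specific to the cited paper.

```latex
\[
  R_{\epsilon}(f) \;=\; \Pr_{(x,y)\sim \mathcal{D}}
  \bigl[\,\exists\, \delta,\ \|\delta\| \le \epsilon :\ f(x+\delta) \ne y \,\bigr]
\]
```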


Adversarial Vulnerability of Neural Networks Increases With Input Dimension

Over the past four years, neural networks have proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when seen as a function of the inputs. For most current network architectures, we prove that the ℓ1-norm of these gradients g...
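The ℓ1-norm of the input gradient discussed here can be estimated directly and used as a rough vulnerability proxy: a large value suggests that a small (L-infinity) perturbation can change the loss substantially. A minimal sketch with placeholder `model`, `x`, `y`, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def input_grad_l1(model, x, y):
    """Average l1-norm of d(loss)/d(input) over a batch."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # Sum absolute gradient entries per example, then average over the batch.
    return grad.abs().flatten(1).sum(dim=1).mean()
```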


Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

Deep neural networks represent the state of the art in machine learning in a growing number of fields, including vision, speech and natural language processing. However, recent work raises important questions about the robustness of such architectures, by showing that it is possible to induce classification errors through tiny, almost imperceptible, perturbations. Vulnerability to such “adversa...
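As one simple illustration of a sparsity-based front end (not necessarily the construction in the cited paper), the input can be projected onto its k largest transform coefficients before classification, so most of a small dense perturbation is discarded; the DCT basis and the choice of k are assumptions here.

```python
import numpy as np
from scipy.fft import dct, idct

def sparsify_front_end(x, k):
    """Keep the k largest-magnitude DCT coefficients of a 1-D signal x
    (assumes 1 <= k <= len(x))."""
    coeffs = dct(x, norm="ortho")
    # Zero out everything except the k largest coefficients.
    idx = np.argsort(np.abs(coeffs))[:-k]
    coeffs[idx] = 0.0
    return idct(coeffs, norm="ortho")
```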


On Security and Sparsity of Linear Classifiers for Adversarial Settings

Machine-learning techniques are widely used in security-related applications, like spam and malware detection. However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection. In this work, we focus on the vulnerability of linear classifiers to evasion attacks. This can be considered a relevan...
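For a linear classifier f(x) = w·x + b with labels in {-1, +1}, the smallest ℓ2 evasion step has a closed form, which makes the setting discussed here easy to illustrate; a minimal NumPy sketch with placeholder names, not the cited paper's attack model.

```python
import numpy as np

def minimal_evasion(w, b, x, y, overshoot=1e-3):
    """Smallest l2 step that moves (x, y in {-1, +1}) across w.x + b = 0."""
    margin = y * (w @ x + b)
    if margin <= 0:
        return x  # already misclassified, nothing to do
    # Move against the signed weight direction by just over the margin.
    return x - (margin + overshoot) * y * w / np.dot(w, w)
```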



Journal

Journal title: Neurocomputing

Year: 2022

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2022.01.062